
Lego's latest educational kit seeks to teach AI as part of computer science, not to build a chatbot

Engadget

Lego also recognized that it had to build a course that'll work regardless of a teacher's fluency in such subjects. So a big part of developing the course was making sure that teachers had the tools they needed to stay on top of whatever lessons they were working on. "When we design and we test the products, we're not the ones testing in the classroom," Silwinski said. "We give it to a teacher and we provide all of the lesson materials, all of the training, all of the notes, all the presentation materials, everything that they need to be able to teach the lesson." Lego also took into account the fact that some schools might introduce their students to these things starting in kindergarten, whereas others might skip to the grade 3-5 or 6-8 sets.


A Self-Efficacy Theory-based Study on the Teachers Readiness to Teach Artificial Intelligence in Public Schools in Sri Lanka

Rajapakse, Chathura, Ariyarathna, Wathsala, Selvakan, Shanmugalingam

arXiv.org Artificial Intelligence

The need for and challenges of teaching artificial intelligence (AI) at primary, secondary, and upper-secondary levels have been a major focus of recent academic discussions [1],[2],[3]. Often referred to as AI4K12 [4], this area explores global initiatives that introduce AI to students from kindergarten through high school. The rapid advancements in deep learning and generative AI technologies suggest AI will become a transformative force. This realisation has prompted governments and policymakers to recognise the need to prepare future citizens for a world heavily influenced by AI. As AI becomes increasingly integrated into information systems, concerns are mounting about citizens' ability to use these systems responsibly and understand the consequences of not doing so [5]. Furthermore, anxieties regarding AI's potential impact on societal sustainability highlight the need to equip future workforces with the skills to combine human creativity with AI's potential to create sustainable systems.


What babies can teach AI

MIT Technology Review

But what if an AI could learn like a baby? AI models are trained on vast data sets consisting of billions of data points. Researchers at New York University wanted to see what such models could do when they were trained on a much smaller data set: the sights and sounds experienced by a single child learning to talk. To their surprise, their AI learned a lot--thanks to a curious baby called Sam. The researchers strapped a camera on Sam's head, and he wore it off and on for one and a half years, from the time he was six months old until a little after his second birthday. The material he collected allowed the researchers to teach a neural network to match words to the objects they represent, reports Cassandra Willyard.


Scientists use 6-month-old baby named Sam to teach AI how humanity develops - amid fears tech could destroy us

Daily Mail - Science & tech

Scientists trained an AI through the eyes of a baby in an effort to teach the tech how humanity develops - amid fears it could destroy us. Researchers at New York University strapped a headcam recorder to Sam from when he was just six months old until his second birthday. The footage of 250,000 words and corresponding images was fed to an AI model, which learned how to recognize different objects similar to how Sam did. The AI developed its knowledge in the same way the child did - by observing the environment, listening to nearby people and connecting dots between what was seen and heard. The experiment also probed the connection between visual and linguistic representation in a child's development.


The Download: how babies can teach AI, and new mRNA vaccines

MIT Technology Review

Human babies are far better at learning than even the very best large language models. To be able to write in passable English, ChatGPT had to be trained on massive data sets that contain millions upon millions of words. Children, on the other hand, have access to only a tiny fraction of that data, yet by age three they're communicating in quite sophisticated ways. A team of researchers at New York University wondered if AI could learn like a baby. What could an AI model do when given a far smaller data set--the sights and sounds experienced by a single child learning to talk?
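
The underlying recipe, matching each image to the word heard alongside it, can be sketched as a tiny contrastive-learning toy. Everything below (random features, single linear encoders, the dimensions and learning rate) is invented for illustration and is far simpler than the NYU model itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, dim = 4, 8                          # four word-object pairs, 8-d features
W_img = rng.normal(size=(dim, dim)) * 0.1    # "image encoder": a single linear map
W_txt = rng.normal(size=(dim, dim)) * 0.1    # "word encoder": a single linear map
images = rng.normal(size=(n_pairs, dim))     # stand-ins for visual features
words = rng.normal(size=(n_pairs, dim))      # stand-ins for word features

def scores():
    # Similarity of every image embedding to every word embedding
    return (images @ W_img) @ (words @ W_txt).T

def ce_loss(s):
    # Cross-entropy that pulls each image toward its own word (the diagonal)
    p = np.exp(s - s.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_pairs), np.arange(n_pairs)]).mean(), p

loss_before, _ = ce_loss(scores())
lr = 0.5
for _ in range(300):
    _, p = ce_loss(scores())
    grad_s = (p - np.eye(n_pairs)) / n_pairs # gradient of the loss w.r.t. scores
    zi, zt = images @ W_img, words @ W_txt
    W_img -= lr * images.T @ (grad_s @ zt)   # backprop through the bilinear score
    W_txt -= lr * words.T @ (grad_s.T @ zi)
loss_after, _ = ce_loss(scores())
print(round(loss_before, 3), round(loss_after, 3))
```

After training, the matching-pair loss drops well below its near-chance starting point; the real study did this at scale, with deep encoders and frames and utterances from the headcam footage rather than random vectors.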


The Computer Scientist Trying to Teach AI to Learn Like We Do

#artificialintelligence

Artificial intelligence algorithms are designed to learn in fits and starts. Instead of continuously updating their knowledge base with new information over time as humans do, algorithms can learn only during the training phase. After that, their knowledge remains frozen; they perform the task they were trained for without being able to keep learning as they do it. To learn even one new thing, algorithms must be trained again from scratch. It's as if every time you met a new person, the only way you could learn her name would be to reboot your brain.
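
The "frozen knowledge" problem described above, often called catastrophic forgetting, is easy to reproduce in a toy setting. The sketch below (synthetic data, a plain logistic-regression model, made-up task definitions) trains sequentially on two conflicting tasks and shows the first one being overwritten:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y_task_a = (X[:, 0] > 0).astype(float)   # task A: label = sign of feature 0
y_task_b = 1.0 - y_task_a                # task B: the exact opposite rule

def train(w, X, y, steps=200, lr=0.1):
    # Plain logistic-regression gradient descent
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

w = train(np.zeros(2), X, y_task_a)
acc_a_before = accuracy(w, X, y_task_a)  # high: the model has learned task A
w = train(w, X, y_task_b)                # keep training, but on task B only
acc_a_after = accuracy(w, X, y_task_a)   # low: task A has been overwritten
print(acc_a_before, acc_a_after)
```

Nothing in plain gradient descent protects the first task's weights, which is why, as the article says, learning "even one new thing" typically means retraining from scratch on all the data.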


Meta's new learning algorithm can teach AI to multi-task

#artificialintelligence

If you can recognize a dog by sight, then you can probably recognize a dog when it is described to you in words. Deep neural networks have become very good at identifying objects in photos and conversing in natural language, but not at the same time: there are AI models that excel at one or the other, but not both. Part of the problem is that these models learn different skills using different techniques. This is a major obstacle for the development of more general-purpose AI, machines that can multi-task and adapt. It also means that advances in deep learning for one skill often do not transfer to others.

Data2vec is part of a big trend in AI toward models that can learn to understand the world in more than one way. "It's a clever idea," says Ani Kembhavi at the Allen Institute for AI in Seattle, who works on vision and language. "It's a promising advance when it comes to generalized systems for learning." An important caveat is that although the same learning algorithm can be used for different skills, it can only learn one skill at a time. Once it has learned to recognize images, it must start from scratch to learn to recognize speech.
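
The shared recipe behind data2vec, a student network trained to predict an EMA "teacher's" representation of the unmasked input, can be caricatured in a few lines. The toy below (one linear layer, random vectors, invented hyperparameters) only illustrates the masked-latent-prediction loop, not the real architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 16
W_student = rng.normal(size=(dim, dim)) * 0.1
W_teacher = rng.normal(size=(dim, dim)) * 0.1  # teacher starts out different
tau = 0.99                                     # EMA decay for the teacher

def step(x, lr=0.05):
    global W_student, W_teacher
    visible = rng.random(dim) >= 0.5
    x_masked = np.where(visible, x, 0.0)       # hide about half the input
    target = W_teacher @ x                     # teacher encodes the full input
    pred = W_student @ x_masked                # student only sees the masked view
    err = pred - target
    W_student = W_student - lr * np.outer(err, x_masked)  # grad of 0.5*||err||^2
    W_teacher = tau * W_teacher + (1 - tau) * W_student   # teacher tracks student
    return float((err ** 2).mean())

losses = [step(rng.normal(size=dim)) for _ in range(500)]
print(round(float(np.mean(losses[:20])), 3), round(float(np.mean(losses[-20:])), 3))
```

Because the target is a latent representation rather than pixels, words, or audio samples, the same loop can in principle be pointed at any modality, which is the generality the excerpt describes; the caveat stands that each run still learns a single skill.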


CausalCity: Simulated city aims to teach AI "counterfactual reasoning"

#artificialintelligence

AI algorithms struggle to recognise events or objects in contexts that differ from the training set. A situation in the world has no boundaries at all: you don't know what's in the situation and what's out of it. If you're trying to train an AI to deal with this "unframed" world, you run into a lot of challenges. Humans learn about causal relationships by making interventions in a given environment, observing the result, then refining the mental model they've built by making similar actions in subtly different environments in the great, fluid thing that is The World. It's hard to build AI training sets that help algorithms "understand" the myriad causal relationships taking place at any given time in a similar way; it is far easier to train them on fixed patterns of behaviour, such as the hard numbers that need to be crunched to beat a human at a game of tightly circumscribed mathematical probabilities like chess.
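
The gap between observing correlations and intervening can be made concrete with a toy structural causal model. The variables below (rain, headlights, accidents) and their equations are invented for illustration and have nothing to do with CausalCity's actual scenarios:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n=100_000, do_lights=None):
    """Tiny structural causal model: rain causes both headlights and accidents."""
    rain = rng.random(n) < 0.3
    if do_lights is None:
        # Observational world: drivers mostly switch lights on when it rains
        lights = np.where(rain, rng.random(n) < 0.9, rng.random(n) < 0.1)
    else:
        # Intervention: force the lights regardless of rain (a "do" operation)
        lights = np.full(n, do_lights)
    # Accidents depend on rain only, never on the lights themselves
    accident = np.where(rain, rng.random(n) < 0.2, rng.random(n) < 0.02)
    return rain, lights.astype(bool), accident

_, lights, accident = simulate()
p_obs = accident[lights].mean()            # P(accident | lights on): inflated by rain
_, _, accident_do = simulate(do_lights=True)
p_do = accident_do.mean()                  # P(accident | do(lights on)): true causal effect
print(round(float(p_obs), 3), round(float(p_do, ), 3))
```

An agent that only observes would conclude headlights are dangerous; an agent that can intervene, as in a simulator, discovers they have no causal effect, which is exactly the kind of counterfactual reasoning such environments aim to teach.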


Can we teach AI how to code? Welcome to IBM's Project CodeNet

#artificialintelligence

IBM's AI research division has released a 14-million-sample dataset to develop machine learning models that can help in programming tasks. Called Project CodeNet, the dataset takes its name from ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning. While there's scant chance that machine learning models built on the CodeNet dataset will make human programmers redundant, there's reason to be hopeful that they will make developers more productive. In the early 2010s, impressive advances in machine learning triggered excitement (and fear) about artificial intelligence soon automating many tasks, including programming. But AI's penetration in software development has been extremely limited.